18 research outputs found

    On the Effectiveness of Tools to Support Infrastructure as Code: Model-Driven Versus Code-Centric

    [EN] Infrastructure as Code (IaC) is an approach to infrastructure automation that is based on software development practices. The IaC approach supports code-centric tools that use scripts to specify the creation, updating, and execution of cloud infrastructure resources. Since each cloud provider offers a different type of infrastructure, the definition of an infrastructure resource (e.g., a virtual machine) implies writing several lines of code that greatly depend on the target cloud provider. Model-driven tools, meanwhile, abstract the complexity of using IaC scripts through the high-level modeling of the cloud infrastructure. In a previous work, we presented an infrastructure modeling approach and tool (Argon) for cloud provisioning that leverages model-driven engineering and supports the IaC approach. The objective of the present work is to compare a model-driven tool (Argon) with a well-known code-centric tool (Ansible) in order to provide empirical evidence of their effectiveness when defining the cloud infrastructure, and of the participants' perceptions when using these tools. We therefore conducted a family of three experiments involving 67 Computer Science students in order to compare Argon with Ansible as regards their effectiveness, efficiency, perceived ease of use, perceived usefulness, and intention to use. We used the AB/BA crossover design to configure the individual experiments and the linear mixed model to statistically analyze the data collected and subsequently obtain empirical findings. The results of the individual experiments and the meta-analysis indicate that Argon is more effective as regards supporting the IaC approach in terms of defining the cloud infrastructure. The participants also perceived that Argon is easier to use and more useful for specifying the infrastructure resources. Our findings suggest that Argon accelerates the provisioning process by modeling the cloud infrastructure and automating the generation of scripts for different DevOps tools, as compared to Ansible, which is a code-centric tool that is widely used in practice.

    This work was supported by the Ministry of Science, Innovation, and Universities (Adapt@Cloud project), Spain, under Grant TIN2017-84550-R. The work of Julio Sandobalin was supported by the Escuela Politécnica Nacional, Ecuador.

    Sandobalín, J.; Insfran, E.; Abrahao Gonzales, SM. (2020). On the Effectiveness of Tools to Support Infrastructure as Code: Model-Driven Versus Code-Centric. IEEE Access. 8:17734-17761. https://doi.org/10.1109/ACCESS.2020.2966597
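    As a hedged sketch of how an AB/BA crossover analysis with a linear mixed model is typically set up in Python (using pandas and statsmodels), the snippet below fits treatment, period, and sequence as fixed effects with a random intercept per subject. The file name and column names (subject, sequence, period, treatment, effectiveness) are assumptions for illustration only and are not the authors' actual data or scripts.

        # Illustrative sketch: fitting a linear mixed model to AB/BA crossover data.
        # Assumed columns: subject, sequence (AB/BA), period (1/2),
        # treatment (Argon/Ansible), effectiveness (score). Hypothetical data file.
        import pandas as pd
        import statsmodels.formula.api as smf

        data = pd.read_csv("crossover_results.csv")  # hypothetical file name

        # Fixed effects: treatment, period, sequence; random intercept per subject.
        model = smf.mixedlm(
            "effectiveness ~ C(treatment) + C(period) + C(sequence)",
            data,
            groups=data["subject"],
        )
        result = model.fit()
        print(result.summary())  # the treatment coefficient estimates the tool effect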

    A Smart Provisioning Approach to Cloud Infrastructure

    [EN] Cloud Computing has become the primary pay-per-use model used by practitioners and researchers to obtain an infrastructure in a short time. DevOps is a paradigm that provides practices and tools with which to optimize software delivery time. On the one hand, Infrastructure as Code is the cornerstone of DevOps for infrastructure automation based on software development practices. On the other hand, there are tools that provide support for infrastructure provisioning and configuration management in the Cloud. Companies are currently using Cloud-based DevOps processes to leverage the capacities of Cloud Computing and improve software delivery time. However, infrastructure provisioning is a time-consuming and error-prone activity because it is supported by tools that work in isolation, with no link between them. For this reason, we propose a smart, model-based approach to infrastructure provisioning in which models contain the configuration needed to create the infrastructure, run a set of automated tests, and automatically set up the DevOps tools used in this process.

    This research is supported by the Value@Cloud project (TIN2013-46300-R).

    Sandobalin Guaman, JC.; Insfran, E.; Abrahao Gonzales, SM. (2018). A Smart Provisioning Approach to Cloud Infrastructure. Journal of Computers. 29(6):229-233. https://doi.org/10.3966/199115992018122906022
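    As a rough illustration of the idea that a model can hold the configuration needed to drive provisioning, the sketch below defines a minimal infrastructure model and derives an ordered provisioning plan from it. All class names and fields are hypothetical; this is not the authors' actual metamodel or tooling.

        # Minimal sketch of a model-driven provisioning idea (hypothetical names):
        # the model holds the configuration needed to create infrastructure, and a
        # generator turns it into ordered provisioning steps.
        from dataclasses import dataclass, field
        from typing import List


        @dataclass
        class VirtualMachine:
            name: str
            image: str
            cpu: int
            memory_gb: int


        @dataclass
        class InfrastructureModel:
            provider: str                      # e.g. "aws" or "azure"
            machines: List[VirtualMachine] = field(default_factory=list)


        def generate_plan(model: InfrastructureModel) -> List[str]:
            """Derive an ordered list of provisioning steps from the model."""
            steps = [f"authenticate with {model.provider}"]
            for vm in model.machines:
                steps.append(
                    f"create VM {vm.name} ({vm.cpu} vCPU, {vm.memory_gb} GB) from {vm.image}"
                )
            steps.append("run smoke tests against provisioned resources")
            return steps


        if __name__ == "__main__":
            model = InfrastructureModel(
                provider="aws",
                machines=[VirtualMachine("web-1", "ubuntu-22.04", cpu=2, memory_gb=4)],
            )
            for step in generate_plan(model):
                print(step)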

    Defining and validating a multimodel approach for product architecture derivation and improvement

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-41533-3_24

    Software architectures are the key to achieving the non-functional requirements (NFRs) in any software project. In software product line (SPL) development, it is crucial to identify whether the NFRs for a specific product can be attained with the built-in architectural variation mechanisms of the product line architecture, or whether additional architectural transformations are required. This paper presents a multimodel approach for quality-driven product architecture derivation and improvement (QuaDAI). A controlled experiment is also presented with the objective of comparing the effectiveness, efficiency, perceived ease of use, intention to use, and perceived usefulness with regard to participants using QuaDAI as opposed to the Architecture Tradeoff Analysis Method (ATAM). The results show that QuaDAI is more efficient and perceived as easier to use than ATAM, from the perspective of novice software architecture evaluators. However, the other variables were not found to be statistically significant. Further replications are needed to obtain more conclusive results.

    This research is supported by the MULTIPLE project (MICINN TIN2009-13838) and the Vali+D fellowship program (ACIF/2011/235).

    González Huerta, J.; Insfrán Pelozo, CE.; Abrahao Gonzales, SM. (2013). Defining and validating a multimodel approach for product architecture derivation and improvement. In Model-Driven Engineering Languages and Systems. Springer. 388-404. https://doi.org/10.1007/978-3-642-41533-3_24
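    The general idea of quality-driven architecture improvement, choosing among candidate architectural transformations according to their estimated impact on prioritized NFRs, can be sketched as a simple weighted-score selection. The weights, transformation names, and impact values below are invented for illustration; this is not the QuaDAI multimodel or its actual derivation and improvement process.

        # Illustrative sketch only: pick the architectural transformation with the
        # best weighted impact on prioritized NFRs (invented numbers and names).
        nfr_weights = {"performance": 0.5, "reliability": 0.3, "modifiability": 0.2}

        # Estimated impact of each candidate transformation on each NFR (-1..1).
        candidates = {
            "introduce cache": {"performance": 0.6, "reliability": 0.0, "modifiability": -0.1},
            "replicate service": {"performance": 0.2, "reliability": 0.5, "modifiability": -0.2},
            "split component": {"performance": -0.1, "reliability": 0.1, "modifiability": 0.6},
        }


        def weighted_score(impacts: dict) -> float:
            return sum(nfr_weights[nfr] * impacts.get(nfr, 0.0) for nfr in nfr_weights)


        best = max(candidates, key=lambda name: weighted_score(candidates[name]))
        print(best, round(weighted_score(candidates[best]), 3))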

    A systematic review of quality attributes and measures for software product lines

    [EN] It is widely accepted that software measures provide an appropriate mechanism for understanding, monitoring, controlling, and predicting the quality of software development projects. In software product lines (SPL), quality is even more important than in a single software product since, owing to systematic reuse, a fault or an inadequate design decision could be propagated to several products in the family. Over the last few years, a great number of quality attributes and measures for assessing the quality of SPL have been reported in the literature. However, no studies summarizing the current knowledge about them exist. This paper presents a systematic literature review with the objective of identifying and interpreting all the available studies from 1996 to 2010 that present quality attributes and/or measures for SPL. These attributes and measures have been classified using a set of criteria that includes the life cycle phase in which the measures are applied; the corresponding quality characteristics; their support for specific SPL characteristics (e.g., variability, compositionality); the procedure used to validate the measures, etc. We found 165 measures related to 97 different quality attributes. The results of the review indicated that 92% of the measures evaluate attributes that are related to maintainability. In addition, 67% of the measures are used during the design phase of Domain Engineering, and 56% are applied to evaluate the product line architecture. However, only 25% of them have been empirically validated. In conclusion, the results provide a global vision of the state of the research within this area in order to help researchers in detecting weaknesses, directing research efforts, and identifying new research lines. In particular, there is a need for new measures with which to evaluate both the quality of the artifacts produced during the entire SPL life cycle and other quality characteristics. There is also a need for more validation (both theoretical and empirical) of existing measures. In addition, our results may be useful as a reference guide for practitioners to assist them in the selection or the adaptation of existing measures for evaluating their software product lines. © 2011 Springer Science+Business Media, LLC.

    This research has been funded by the Spanish Ministry of Science and Innovation under the MULTIPLE (Multimodeling Approach For Quality-Aware Software Product Lines) project with ref. TIN2009-13838.

    Montagud Gregori, S.; Abrahao Gonzales, SM.; Insfrán Pelozo, CE. (2012). A systematic review of quality attributes and measures for software product lines. Software Quality Journal. 20(3-4):425-486. https://doi.org/10.1007/s11219-011-9146-7

    Innovación docente para el desarrollo de la competencia transversal “Conocimiento de problemas contemporáneos” en el marco de una asignatura de Calidad de Software

    [EN] This paper presents a teaching innovation based on the implementation of different training activities for the development of the competency "CT-10 Knowledge of Contemporary Problems" within the Transversal Competences Project of the Universitat Politècnica de València (UPV). The innovation is based on the use of different training activities conducted in the context of a course on Software Quality in the Bachelor's Degree in Informatics Engineering at the UPV. In particular, projects, report writing, and seminars have been used to present the results of practical activities that are close to the work a software quality engineer would carry out in a business environment. We present the results of the students' acquisition of the competency. A survey was also carried out in order to assess the students' opinion of the teaching activities. Finally, we describe the lessons learned and possible improvements to the training activities based on the experience gained.

    Abrahao Gonzales, SM.; Insfrán Pelozo, CE. (2019). Innovación docente para el desarrollo de la competencia transversal “Conocimiento de problemas contemporáneos” en el marco de una asignatura de Calidad de Software. In IN-RED 2019. V Congreso de Innovación Educativa y Docencia en Red. Editorial Universitat Politècnica de València. 1302-1315. https://doi.org/10.4995/INRED2019.2019.10342

    Validating a model-driven software architecture evaluation and improvement method: A family of experiments

    Context: Software architectures should be evaluated during the early stages of software development in order to verify whether the non-functional requirements (NFRs) of the product can be fulfilled. This activity is even more crucial in software product line (SPL) development, since it is also necessary to identify whether the NFRs of a particular product can be achieved by exercising the variation mechanisms provided by the product line architecture or whether additional transformations are required. These issues have motivated us to propose QuaDAI, a method for the derivation, evaluation and improvement of software architectures in model-driven SPL development. Objective: We present in this paper the results of a family of four experiments carried out to empirically validate the evaluation and improvement strategy of QuaDAI. Method: The family of experiments was carried out by 92 participants: Computer Science Master's and undergraduate students from Spain and Italy. The goal was to compare the effectiveness, efficiency, perceived ease of use, perceived usefulness and intention to use with regard to participants using the evaluation and improvement strategy of QuaDAI as opposed to the Architecture Tradeoff Analysis Method (ATAM). Results: The main result was that the participants produced their best results when applying QuaDAI, signifying that the participants obtained architectures with better values for the NFRs faster, and that they found the method easier to use, more useful and more likely to be used. The results of the meta-analysis carried out to aggregate the results obtained in the individual experiments also confirmed these findings. Conclusions: The results support the hypothesis that QuaDAI would achieve better results than ATAM in the experiments and that QuaDAI can be considered a promising approach with which to perform the architectural evaluations that occur after product architecture derivation in model-driven SPL development processes when carried out by novice software evaluators.

    The authors would like to thank all the participants in the experiments for their selfless involvement in this research. This research is supported by the MULTIPLE project (MICINN TIN2009-13838) and the Vali+D program (ACIF/2011/235).

    González Huerta, J.; Insfrán Pelozo, CE.; Abrahao Gonzales, SM.; Scanniello, G. (2015). Validating a model-driven software architecture evaluation and improvement method: A family of experiments. Information and Software Technology. 57:405-429. https://doi.org/10.1016/j.infsof.2014.05.018
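    The aggregation step mentioned above, a meta-analysis of the individual experiments, is commonly done by combining per-experiment effect sizes with inverse-variance weights. The sketch below shows that calculation with made-up numbers; these are not the paper's actual effect sizes or its exact meta-analysis procedure.

        # Fixed-effect meta-analysis sketch: inverse-variance weighting of effect sizes.
        # The effect sizes and variances below are invented for illustration.
        import math

        experiments = [
            {"effect": 0.45, "variance": 0.04},
            {"effect": 0.30, "variance": 0.06},
            {"effect": 0.55, "variance": 0.05},
            {"effect": 0.40, "variance": 0.07},
        ]

        weights = [1.0 / e["variance"] for e in experiments]
        pooled = sum(w * e["effect"] for w, e in zip(weights, experiments)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate

        print(f"pooled effect = {pooled:.3f}, 95% CI = "
              f"[{pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}]")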

    Assessing the effectiveness of sequence diagrams in the comprehension of functional requirements: results from a family of five experiments

    Modeling is a fundamental activity within the requirements engineering process and concerns the construction of abstract descriptions of requirements that are amenable to interpretation and validation. The choice of a modeling technique is critical whenever it is necessary to discuss the interpretation and validation of requirements. This is particularly true in the case of functional requirements and stakeholders with divergent goals and different backgrounds and experience. This paper presents the results of a family of experiments conducted with students and professionals to investigate whether the comprehension of functional requirements is influenced by the use of dynamic models represented by means of UML sequence diagrams. The family contains five experiments performed in different locations and with 112 participants of different abilities and levels of experience with UML. The results show that sequence diagrams improve the comprehension of the modeled functional requirements in the case of high-ability and more experienced participants.

    The authors wish to thank all the participants in the experiments. This research was partially supported by the MULTIPLE project (ref. TIN2009-13838).

    Abrahao Gonzales, SM.; Gravino, C.; Insfrán Pelozo, CE.; Scanniello, G.; Tortora, G. (2013). Assessing the effectiveness of sequence diagrams in the comprehension of functional requirements: results from a family of five experiments. IEEE Transactions on Software Engineering. 39(3):327-342. https://doi.org/10.1109/TSE.2012.27

    Towards a monitoring middleware for cloud services

    © 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    Cloud Computing represents a new trend in the development and use of software. Many organizations are currently adopting the use of services that are hosted in the cloud by employing the Software as a Service (SaaS) model. Services are typically accompanied by a Service Level Agreement (SLA), which defines the quality terms that a provider offers to its customers. Many monitoring tools have been proposed to report compliance with the SLA. However, they have some limitations when changes to the monitoring requirements must be made and because of the complexity involved in capturing low-level raw data from services at runtime. In this paper, we propose the design of a platform-independent monitoring middleware for cloud services, which supports the monitoring of SLA compliance and provides a report containing SLA violations that may help stakeholders to make decisions regarding how to improve the quality of cloud services. Moreover, our middleware definition is based on the use of models@run.time, which allows the dynamic change of quality requirements and/or the dynamic selection of different metric operationalizations (i.e., calculation formulas) with which to measure the quality of services. In order to demonstrate the feasibility of our approach, we show an instantiation of the proposed middleware that can be used to monitor services deployed on the Microsoft Azure platform.

    This research is supported by the Value@Cloud project (TIN2013-46300-R); the Scholarship Program Senescyt, Ecuador; the University of Cuenca, Ecuador; and the Microsoft Azure for Research Award Program.

    Cedillo Orellana, IP.; Jiménez Gómez, J.; Abrahao Gonzales, SM.; Insfrán Pelozo, CE. (2015). Towards a monitoring middleware for cloud services. IEEE. https://doi.org/10.1109/SCC.2015.68
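    The idea of swapping metric operationalizations at run time while reporting SLA violations can be illustrated with a small sketch. The metric formulas, thresholds, and function names below are hypothetical assumptions and do not reflect the middleware's actual design or API.

        # Sketch of SLA-compliance checking with interchangeable metric
        # operationalizations (hypothetical names, thresholds, and formulas).
        from typing import Callable, Dict, List

        # Two interchangeable ways of operationalizing "availability".
        def availability_ratio(m: Dict[str, float]) -> float:
            return m["uptime"] / (m["uptime"] + m["downtime"])

        def availability_success_rate(m: Dict[str, float]) -> float:
            return m["successful_requests"] / m["total_requests"]

        sla = {"availability": 0.995, "avg_response_ms": 300.0}

        def check_sla(measurements: Dict[str, float],
                      availability_op: Callable[[Dict[str, float]], float]) -> List[str]:
            """Return a list of SLA violations for the given raw measurements."""
            violations = []
            if availability_op(measurements) < sla["availability"]:
                violations.append("availability below agreed level")
            if measurements["avg_response_ms"] > sla["avg_response_ms"]:
                violations.append("average response time above agreed level")
            return violations

        raw = {"uptime": 9950.0, "downtime": 50.0,
               "successful_requests": 99_500, "total_requests": 100_000,
               "avg_response_ms": 320.0}

        # Selecting a different operationalization at run time only changes the
        # callable passed in, not the checking logic.
        print(check_sla(raw, availability_ratio))
        print(check_sla(raw, availability_success_rate))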

    A Taxonomy of Quality Metrics for Cloud Services

    [EN] A large number of metrics with which to assess the quality of cloud services have been proposed over the last years. However, this knowledge is still dispersed, and stakeholders have little or no guidance when choosing metrics that will be suitable for evaluating their cloud services. The objective of this paper is, therefore, to systematically identify, taxonomically classify, and compare existing quality of service (QoS) metrics in the cloud computing domain. We conducted a systematic literature review of 84 studies selected from a set of 4333 studies published from 2006 to November 2018. We specifically identified 470 metric operationalizations, which were then classified using a taxonomy that is also introduced in this paper. The data extracted from the metrics were subsequently analyzed using thematic analysis. The findings indicated that most metrics evaluate quality attributes related to performance efficiency (64%) and that there is a need for metrics that evaluate other characteristics, such as security and compatibility. The majority of the metrics are used during the Operation phase of the cloud services and are applied to the running service. Our results also revealed that metrics for cloud services are still in the early stages of maturity: only 10% of the metrics had been empirically validated. The proposed taxonomy can be used by practitioners as a guideline when specifying service level objectives or deciding which metric is best suited to the evaluation of their cloud services, and by researchers as a comprehensive quality framework in which to evaluate their approaches.

    This work was supported by the Spanish Ministry of Science, Innovation and Universities through the Adapt@Cloud Project under Grant TIN2017-84550-R. The work of Ximena Guerron was supported in part by the Universidad Central del Ecuador (UCE), and in part by the Banco Central del Ecuador.

    Guerron, X.; Abrahao Gonzales, SM.; Insfran, E.; Fernández-Diego, M.; González-Ladrón-De-Guevara, F. (2020). A Taxonomy of Quality Metrics for Cloud Services. IEEE Access. 8:131461-131498. https://doi.org/10.1109/ACCESS.2020.3009079
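    As a small illustration of what a metric operationalization and its taxonomic classification might look like in practice, the sketch below encodes one availability metric together with a few classification dimensions. The dimension names and values are simplified assumptions for illustration, not the paper's full taxonomy.

        # Simplified sketch of describing a QoS metric operationalization together
        # with a few classification dimensions (invented, simplified values).
        from dataclasses import dataclass

        @dataclass
        class MetricOperationalization:
            name: str
            quality_characteristic: str   # e.g. performance efficiency, reliability
            lifecycle_phase: str          # e.g. Operation
            applied_to: str               # e.g. running service
            formula: str

            def compute(self, uptime_h: float, downtime_h: float) -> float:
                # Concrete calculation for this particular operationalization.
                return uptime_h / (uptime_h + downtime_h)

        availability = MetricOperationalization(
            name="Availability",
            quality_characteristic="reliability",
            lifecycle_phase="Operation",
            applied_to="running service",
            formula="uptime / (uptime + downtime)",
        )

        print(availability.quality_characteristic, availability.compute(719.0, 1.0))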

    An Update on Effort Estimation in Agile Software Development: A Systematic Literature Review

    [EN] Software developers require effective effort estimation models to facilitate project planning. Although Usman et al. systematically reviewed and synthesized the effort estimation models and practices for Agile Software Development (ASD) in 2014, new evidence may provide new perspectives for researchers and practitioners. This article presents a systematic literature review that updates the Usman et al. study from 2014 to 2020 by analyzing the data extracted from 73 new papers. This analysis allowed us to identify six agile methods: Scrum, eXtreme Programming, and four others, in all of which expert-based estimation methods continue to play an important role. This is particularly the case of Planning Poker, which is very closely related to the most frequently used size metric (story points) and to the way in which software requirements are specified in ASD. There is also a remarkable trend toward studying techniques based on the intensive use of data. In this respect, although most of the data originate from single-company datasets, there is a significant increase in the use of cross-company data. With regard to cost factors, we applied the thematic analysis method. The use of team and project factors appears to be more frequent than the consideration of more technical factors, in accordance with agile principles. Finally, although accuracy is still a challenge, we identified that improvements have been made. On the one hand, an increasing number of papers showed acceptable accuracy values, although many continued to report inadequate results. On the other hand, almost 29% of the papers that reported accuracy metrics also reflected aspects concerning the validation of the models, and 18% reported the effect size when comparing models.

    This work was supported by the Spanish Ministry of Science, Innovation and Universities through the Adapt@Cloud Project under Grant TIN2017-84550-R.

    Fernández-Diego, M.; Méndez, ER.; González-Ladrón-De-Guevara, F.; Abrahao Gonzales, SM.; Insfran, E. (2020). An Update on Effort Estimation in Agile Software Development: A Systematic Literature Review. IEEE Access. 8:166768-166800. https://doi.org/10.1109/ACCESS.2020.3021664
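    Accuracy in effort estimation studies is often reported with measures such as MMRE or Pred(25); the short sketch below shows how those values are computed for a hypothetical set of estimates. The (actual, estimated) pairs are invented, and the paper itself does not prescribe these exact measures for every study it reviews.

        # Sketch of two common estimation-accuracy measures, MMRE and Pred(25),
        # computed over invented (actual, estimated) effort pairs in person-hours.
        pairs = [(120, 100), (80, 95), (200, 180), (60, 90), (150, 140)]

        mre = [abs(actual - est) / actual for actual, est in pairs]
        mmre = sum(mre) / len(mre)                       # mean magnitude of relative error
        pred25 = sum(m <= 0.25 for m in mre) / len(mre)  # share of estimates within 25%

        print(f"MMRE = {mmre:.3f}, Pred(25) = {pred25:.2f}")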